Learning Capabilities of Recurrent Neural Networks

Author

  • Bhaskar DasGupta
Abstract

Recurrent neural networks are important models of neural computation. This work relates their computational power to that of other conventional models of computation, such as Turing machines and finite automata, and proves results about their learning capabilities. Specifically, it shows that: (a) probabilistic recurrent networks and probabilistic Turing machine models are equivalent; (b) probabilistic recurrent networks with bounded error probabilities are no more powerful than deterministic finite automata; (c) deterministic recurrent networks are capable of learning P-complete language problems, which are the hardest problems in P to parallelize; and (d) restricting the weight-threshold relationship in deterministic recurrent networks may allow the network to learn only weaker classes of languages (the NC class).

Bhaskar DasGupta, Department of Computer Science, The Pennsylvania State University, University Park, PA
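Result (b) above bounds bounded-error probabilistic recurrent networks by deterministic finite automata. The easy direction of such a correspondence — that a DFA can be embedded in a recurrent threshold network — can be sketched as follows; the one-hot encoding and weight construction here are a standard illustration, not the paper's construction:

```python
# Sketch (not from the paper): embed a DFA in a recurrent threshold
# network. States are one-hot units; per input symbol, a 0/1 weight
# matrix routes activation from the current state unit to its successor.
def make_dfa_network(delta, n_states, alphabet):
    # W[a][i][j] = 1 iff delta(i, a) == j.
    W = {a: [[1 if delta[(i, a)] == j else 0 for j in range(n_states)]
             for i in range(n_states)]
         for a in alphabet}

    def step(state_vec, symbol):
        # Threshold unit: unit j fires iff an active unit i feeds it
        # with weight 1 (exactly one does, since delta is a function).
        return [1 if sum(state_vec[i] * W[symbol][i][j]
                         for i in range(n_states)) >= 1 else 0
                for j in range(n_states)]

    return step

# Parity DFA over {0, 1}: state 0 = even number of 1s seen so far.
delta = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
step = make_dfa_network(delta, 2, '01')

state = [1, 0]  # start in state 0
for ch in '1101':
    state = step(state, ch)
print(state.index(1))  # '1101' has three 1s, so the network ends in state 1
```

The network's state vector is exactly the automaton's state in unary, so the recurrence simulates the DFA step for step.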


Similar articles

Modeling with Recurrent Neural Networks using Generalized Mean Neuron Model

This paper presents the use of the generalized mean neuron model (GMN) in recurrent neural networks (RNNs). The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. Learning is implemented on-line, based on input-output data, using an alternative approach to the recurrent backpropagation learning algorithm. The learning and generaliza...
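The snippet names a generalized-mean aggregation but not its exact form; a minimal sketch, assuming the standard power mean M_p(x) = ((1/n) Σ x_i^p)^(1/p) over positive inputs and a sigmoid activation (both assumptions, not taken from the paper), might look like:

```python
import math

# Hypothetical reading of a generalized-mean neuron: aggregate the
# (positive) inputs with the power mean instead of a weighted sum,
# then squash with a sigmoid.
def generalized_mean(xs, p):
    n = len(xs)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

def gmn_neuron(xs, p=2.0):
    return 1.0 / (1.0 + math.exp(-generalized_mean(xs, p)))

# p = 1 recovers the arithmetic mean; larger p biases toward the maximum.
print(generalized_mean([1.0, 2.0, 4.0], 1.0))  # arithmetic mean, 7/3
```

The appeal of the power mean as an aggregation function is that the single parameter p interpolates between min-like, average-like, and max-like combination of the inputs.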


Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory

This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying push-down automaton from exa...
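The push/pop dynamics such a network must learn can be illustrated with a plain pushdown recognizer for the DCF language a^n b^n — an example language chosen here for illustration, not necessarily one used in the paper:

```python
# Symbolic target dynamics for an NNPDA-style learner: recognize a^n b^n
# (n >= 0) by pushing on 'a', popping on 'b', and accepting iff the
# stack is empty when the input ends.
def recognize_anbn(s):
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:            # an 'a' after any 'b' is never in a^n b^n
                return False
            stack.append('A')     # push action
        elif ch == 'b':
            seen_b = True
            if not stack:         # popping an empty stack rejects
                return False
            stack.pop()           # pop action
        else:
            return False
    return len(stack) == 0

print(recognize_anbn('aaabbb'), recognize_anbn('aabbb'))  # True False
```

The stack is what lifts the recognizer beyond finite-state power: the unbounded count of pending 'a's cannot be held in any fixed number of states, which is why the NNPDA couples the recurrent network to an external stack.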


About Learning in Recurrent Bistable Gradient Networks

Recurrent Bistable Gradient Networks [1], [2], [3] are attractor-based neural networks characterized by bistable dynamics of each single neuron. Coupled together through linear interactions determined by the interconnection weights, these networks no longer suffer from spurious states or severely limited capacity. Vladimir Chinarov and Michael Menzinger, who invented these networks, trained them ...


Adaptive Learning of Linguistic Hierarchy in a Multiple Timescale Recurrent Neural Network

Recent research has revealed that hierarchical linguistic structures can emerge in a recurrent neural network with a sufficient number of delayed context layers. As a representative of this type of network, the Multiple Timescale Recurrent Neural Network (MTRNN) has been proposed for recognising and generating known as well as unknown linguistic utterances. However, the training of utterances per...
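The multiple-timescale idea behind the MTRNN can be sketched with the standard leaky-integrator update (the constants and names below are illustrative, not taken from the paper): each unit follows u_t = (1 − 1/τ)·u_{t−1} + (1/τ)·x_t, so large-τ units change slowly and can carry higher-level context while small-τ units track fast detail.

```python
# Illustrative leaky-integrator unit: tau controls the unit's timescale.
def leaky_integrator(inputs, tau, u0=0.0):
    u = u0
    trace = []
    for x in inputs:
        u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * x
        trace.append(u)
    return trace

step_input = [1.0] * 5
fast = leaky_integrator(step_input, tau=1.0)   # tracks the input immediately
slow = leaky_integrator(step_input, tau=10.0)  # approaches 1.0 only gradually
print(fast[-1], slow[-1])
```

Stacking layers with increasing τ is what gives such a network its hierarchy of timescales: the slow layers effectively see a smoothed summary of the fast layers' activity.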


Levels of Dynamics and Adaptive Behavior in Evolutionary Neural Controllers

Two classes of dynamical recurrent neural networks, Continuous Time Recurrent Neural Networks (CTRNNs) (Yamauchi and Beer, 1994) and Plastic Neural Networks (PNNs) (Floreano and Urzelai, 2000) are compared on two behavioral tasks aimed at exploring their capabilities to display reinforcement-learning like behaviors and adaptation to unpredictable environmental changes. The networks report simil...
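The CTRNN class compared above has a standard defining equation, τ_i·dy_i/dt = −y_i + Σ_j w_ji·σ(y_j + θ_j) + I_i. A minimal Euler-integrated sketch (the weights and inputs are arbitrary toy values, not from the paper) looks like:

```python
import math

# Minimal CTRNN update: tau_i * dy_i/dt = -y_i + sum_j w[j][i] *
# sigma(y_j + theta_j) + I_i, integrated with a simple Euler step.
def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, w, theta, tau, I, dt=0.01):
    n = len(y)
    dydt = [(-y[i]
             + sum(w[j][i] * sigma(y[j] + theta[j]) for j in range(n))
             + I[i]) / tau[i]
            for i in range(n)]
    return [y[i] + dt * dydt[i] for i in range(n)]

# Two-neuron toy network relaxing from rest under a constant input.
y = [0.0, 0.0]
w = [[0.0, 1.0], [-1.0, 0.0]]
theta = [0.0, 0.0]
tau = [1.0, 2.0]
I = [0.5, 0.0]
for _ in range(1000):
    y = ctrnn_step(y, w, theta, tau, I)
```

The per-neuron time constant τ_i is the fixed, evolvable notion of "dynamics" in a CTRNN; PNNs instead adapt the weights themselves at run time, which is what the behavioral comparison in the paper probes.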



Journal:

Volume   Issue 

Pages  -

Publication date: 1991